
    ComBoS: a complete simulator of volunteer computing and desktop grids

    Volunteer Computing is a type of distributed computing in which ordinary people donate their idle computer time to science projects such as SETI@Home, Climateprediction.net and many others. Similarly, Desktop Grid Computing is a form of distributed computing in which an organization uses its existing computers to handle its own long-running computational tasks. BOINC is the main middleware that provides a software platform for Volunteer Computing and Desktop Grid Computing, and it has become a general platform for distributed applications in areas as diverse as mathematics, medicine, molecular biology, climatology, environmental science, and astrophysics. In this paper we present a complete simulator of BOINC infrastructures, called ComBoS. Although other BOINC simulators exist, none of them simulates the complete BOINC infrastructure. Our goal was to create a simulator that, unlike the existing ones, can simulate realistic scenarios taking the whole BOINC infrastructure into account: projects, servers, network, redundant computing, scheduling, and volunteer nodes. The outputs of the simulations allow us to analyze a wide range of statistics, such as the throughput of each project, the number of jobs executed by the clients, the total credit granted, and the average occupation of the BOINC servers. The paper describes the design of ComBoS and the results of the validation performed. This validation compares the results obtained with ComBoS against the real results of three different BOINC projects (Einstein@Home, SETI@Home and LHC@Home). In addition, we analyze the performance of the simulator in terms of memory usage and execution time. The paper also shows that our simulator can guide the design of BOINC projects, describing some case studies using ComBoS that could help designers verify the feasibility of BOINC projects. (C) 2017 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under project grant TIN2016-79637-P, TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
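
    As a rough illustration only, the kind of statistics listed above (throughput per project, jobs executed per client) could be derived from simulated job-completion records as in the following sketch; the record fields, values, and units are assumptions made for this example, not ComBoS's actual output format.

    from collections import Counter

    # Hypothetical simulated completions: (project, client_id, flops) per finished job.
    completions = [
        ("Einstein@Home", "c1", 6.0e13),
        ("SETI@Home",     "c2", 3.2e13),
        ("Einstein@Home", "c1", 6.0e13),
    ]
    simulated_seconds = 3600.0

    throughput_gflops = Counter()   # sustained GFLOPS contributed to each project
    jobs_per_client = Counter()
    for project, client, flops in completions:
        throughput_gflops[project] += flops / simulated_seconds / 1e9
        jobs_per_client[client] += 1

    print(dict(throughput_gflops), dict(jobs_per_client))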

    A heterogeneous mobile cloud computing model for hybrid clouds

    Mobile cloud computing is a paradigm that delivers applications to mobile devices by using cloud computing. In this way, mobile cloud computing allows for a rich user experience: since client applications run remotely in the cloud infrastructure, they use fewer resources on the user's mobile device. In this paper, we present a new mobile cloud computing model, inspired by both the volunteer computing and mobile edge computing paradigms, in which platforms of volunteer devices provide part of the resources of the cloud. These platforms may be hierarchical, based on the capabilities of the volunteer devices and the requirements of the services provided by the clouds. We also describe the orchestration between the volunteer platform and public, private, or hybrid clouds. As we show, this new model can be an inexpensive solution for different application scenarios, with benefits in cost savings, elasticity, scalability, load balancing, and efficiency. Moreover, the evaluation performed shows that the proposed model is a feasible solution for cloud services with a large number of mobile users. (C) 2018 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under project grant TIN2016-79637-P, TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
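
    As a rough, hypothetical sketch of the orchestration idea described above (class names, the capacity-based admission rule, and the tier ordering are assumptions, not the paper's model or API), a service request is offered first to the hierarchical volunteer tiers and falls back to the cloud when no tier can absorb it:

    from dataclasses import dataclass

    @dataclass
    class Tier:
        name: str
        capacity: int      # concurrent requests the tier can absorb
        load: int = 0

        def try_serve(self, demand: int) -> bool:
            if self.load + demand <= self.capacity:
                self.load += demand
                return True
            return False

    def orchestrate(demand, volunteer_tiers, cloud):
        for tier in volunteer_tiers:      # hierarchical volunteer platform first
            if tier.try_serve(demand):
                return tier.name
        cloud.load += demand              # public/private/hybrid cloud as fallback
        return cloud.name

    tiers = [Tier("high-end volunteers", 100), Tier("low-end volunteers", 40)]
    print(orchestrate(30, tiers, Tier("cloud", 10**6)))   # -> high-end volunteers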

    Enhancing the power of two choices load balancing algorithm using round robin policy

    This paper proposes a new version of the power of two choices, SQ(d), load balancing algorithm. This new algorithm improves the performance of the classical model based on power-of-two-choices randomized load balancing. That model considers jobs arriving at a dispatcher as a Poisson stream of rate λn, with λ < 1, at a set of n servers. Using the power of two choices, for each job the dispatcher chooses some constant number d of servers independently and uniformly at random from the n servers and sends the job to the server with the fewest jobs. This algorithm offers an advantage over load balancing based on the shortest-queue discipline, because it provides good performance while reducing the overhead on the servers and the communication network. In this paper, we propose a new version, shortest queue of d with randomization and round-robin policies, SQ-RR(d). This new algorithm combines randomization techniques with static load balancing based on a round-robin policy. In this new version, the dispatcher chooses the d servers as follows: one is selected using a round-robin policy, and the remaining d−1 servers are chosen independently and uniformly at random from the n servers. The dispatcher then sends the job to the server with the fewest jobs. We demonstrate with a theoretical approximation that this new version improves on the performance of the classical solution in all situations, including systems at 99% capacity. Furthermore, we provide simulations that confirm the theoretical approximation developed. This work was partially supported by the project "CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones" (S2018/TCS-4423) from the Madrid Regional Government.
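
    A minimal sketch of the SQ-RR(d) dispatching rule described above, with queue lengths modeled as simple counters; this is an illustration under those assumptions, not the authors' implementation.

    import random

    class SQRRDispatcher:
        """SQ-RR(d): one candidate server comes from a round-robin pointer, the
        other d-1 are drawn independently and uniformly at random (with
        replacement); the job goes to the candidate with the fewest queued jobs."""

        def __init__(self, n_servers, d):
            self.queues = [0] * n_servers   # jobs currently queued at each server
            self.d = d
            self.rr = 0                     # round-robin pointer

        def dispatch(self):
            n = len(self.queues)
            candidates = [self.rr] + [random.randrange(n) for _ in range(self.d - 1)]
            self.rr = (self.rr + 1) % n
            target = min(candidates, key=lambda s: self.queues[s])
            self.queues[target] += 1        # send the job to the shortest of the d queues
            return target

        def complete(self, server):
            self.queues[server] -= 1        # a job finished at this server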

    A new volunteer computing model for data-intensive applications

    Volunteer computing is a type of distributed computing in which ordinary people donate computing resources to scientific projects. BOINC is the main middleware system for this type of distributed computing. The aim of volunteer computing is for organizations to attain large computing power thanks to the participation of volunteer clients, instead of a high investment in infrastructure. There are projects, like the ATLAS@Home project, in which the number of running jobs has reached a plateau due to the high load that file transfers place on the data servers. This is why we have designed an alternative, using the same BOINC infrastructure, to improve the performance of BOINC projects that have reached their limit because of the I/O bottleneck in data servers. This alternative has a percentage of the volunteer clients act as data servers, called data volunteers, which improves the performance of the system by reducing the load on the project data servers. In addition, our solution takes advantage of data locality, leveraging the low network latencies of closer machines. This paper describes our alternative in detail and shows the performance of the solution, applied to three different BOINC projects, using a simulator of our own, ComBoS. Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD, Grant/Award Number: TIN2016-79637-P.
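
    A hedged sketch of the data-volunteer idea described above (the node and latency representations are assumptions for illustration, not the paper's or ComBoS's code): a client downloads an input file from the closest data volunteer that holds it, and only falls back to the project data server when no volunteer has the file.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        files: set = field(default_factory=set)   # input files this node can serve

    def pick_data_source(file_id, client_latency, data_volunteers, central_server):
        """client_latency(node) -> network latency from the requesting client to node."""
        holders = [v for v in data_volunteers if file_id in v.files]
        if not holders:
            return central_server                  # classic behaviour: project data server
        return min(holders, key=client_latency)    # data locality: closest holder wins

    volunteers = [Node("dv1", {"wu_042.in"}), Node("dv2", {"wu_042.in"}), Node("dv3")]
    latency_ms = {"dv1": 80.0, "dv2": 12.0, "dv3": 5.0}   # illustrative values
    source = pick_data_source("wu_042.in", lambda n: latency_ms[n.name],
                              volunteers, Node("data-server"))
    print(source.name)                                    # -> dv2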

    Machine learning applied to accelerate energy consumption models in computing simulators

    The ever-increasing growth of data centres and fog resources makes it difficult for current simulation frameworks to model large computing infrastructures. Therefore, a major trade-off for simulators is the balance between the abstraction level of the models, the scalability, and the performance of the executions. In order to better balance these, early forays applying AI techniques can be found in the literature, but they either lack generality or are tailored to specific simulation frameworks. This paper describes a methodology to integrate memoization, as a supervised-learning technique, into any computing simulation framework. In this process, a bespoke kernel was constructed to analyse the energy models used in most well-known computing simulators (cloud and fog) while avoiding simulation overhead. Finally, a detailed evaluation of the energy models and their performance is presented, showing the impact of applying supervised learning to computing simulators, with performance improvements when the models are more accurate and the computations are dense.
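
    As a small illustration of the memoization idea (the bespoke kernel and the learning pipeline mentioned above are not reproduced here; the linear power model and function names are assumptions), repeated energy-model evaluations with identical inputs can be served from a cache instead of being recomputed:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def host_power(util_pct: int, p_idle: float, p_max: float) -> float:
        """Linear power model: each distinct (utilization, p_idle, p_max) tuple
        is computed once and afterwards served from the cache."""
        return p_idle + (p_max - p_idle) * util_pct / 100.0

    # A simulator evaluates the model at every time step; dense workloads repeat
    # the same utilization levels, so most calls become cache hits.
    trace = [30, 30, 80, 80, 30, 80]                  # CPU utilization per 1-second step
    energy_joules = sum(host_power(u, 100.0, 250.0) for u in trace)
    print(energy_joules, host_power.cache_info())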

    Wepsim: an online interactive educational simulator integrating microdesign, microprogramming, and assembly language programming

    Our educational project has three primary goals. First, we want to provide a robust vision of how hardware and software interplay, by integrating the design of an instruction set (through microprogramming) and the use of that instruction set for assembly programming. Second, we wish to offer a versatile and interactive tool with which this integrated vision can be explored. The tool we have developed to achieve this is called WepSIM, and it provides the view of an elemental processor together with a microprogrammed subset of the MIPS instruction set. In addition, WepSIM is flexible enough to be adapted to other instruction sets or hardware components (e.g., ARM or x86). Third, we want to extend the activities of our university courses, labs, and lectures (fixed hours in a fixed place), so that students may learn by using their mobile devices at any location and at any time of day. This article presents how WepSIM has improved the teaching of Computer Structure courses by empowering students with a more dynamic and guided learning process, and shows the results obtained from using the simulator in the Computer Structure course of the Bachelor's Degree in Computer Science and Engineering at University Carlos III of Madrid.

    WepSIM: Simulador modular e interactivo de un procesador elemental para facilitar una visión integrada de la microprogramación y la programación en ensamblador

    What distinguishes WepSIM from other simulators used in Computer Structure teaching lies in three important aspects. First, it offers an integrated vision of microprogramming and assembly programming, with the possibility of working with different instruction sets. Second, it gives students greater mobility, since it can also be used on mobile devices. Third, it has a modular design: existing elements can be added, removed, or modified. It seeks a balance between simplicity, to facilitate teaching, and detail, to mimic reality. One of the great advantages of the WepSIM simulator is that it is not limited to one specific instruction set, allowing a broad instruction set of real or invented processors to be defined. This paper describes WepSIM and the results of the first experience of its use in the Computer Structure course of the Bachelor's Degree in Computer Science and Engineering at Universidad Carlos III de Madrid. Universidad de Granada: Departamento de Arquitectura y Tecnología de Computadores; Vicerrectorado para la Garantía de la Calidad.

    Improving performance using computational compression through memoization: A case study using a railway power consumption simulator

    The objective of data compression is to avoid redundancy in order to reduce the size of the data to be stored or transmitted. In some scenarios, data compression may increase global performance by reducing the amount of data at a competitive cost in terms of global time and energy consumption. We have introduced computational compression as a technique for reducing redundant computation; in other words, for avoiding carrying out the same computation with the same input to obtain the same output. In some scenarios, such as simulations or graphics processing, part of the computation is repeated with the same input to obtain the same output, and this computation can have an important cost in terms of global time and energy consumption. We propose applying computational compression by using memoization to store results for future reuse and, in this way, minimize the repetition of the same costly computation. Although memoization was proposed for sequential applications in the 1980s, and some projects have applied it in very specific domains, we propose a novel, domain-independent way of using it in high-performance applications as a means of avoiding redundant computation. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under the project TIN2013-41350-P (Scalable Data Management Techniques for High-End Computing Systems).
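
    The following is a domain-independent memoization wrapper in the spirit of the computational-compression idea above; the hashing scheme and the example function are assumptions made for illustration (the railway power consumption simulator of the case study is not reproduced here).

    import hashlib, pickle

    def memoize(costly_fn):
        """Detect repeated inputs by hashing them and reuse the stored output
        instead of recomputing it (computational compression via memoization)."""
        cache = {}
        def wrapper(*args, **kwargs):
            key = hashlib.sha1(pickle.dumps((args, sorted(kwargs.items())))).hexdigest()
            if key not in cache:
                cache[key] = costly_fn(*args, **kwargs)   # computed only on a cache miss
            return cache[key]
        return wrapper

    @memoize
    def segment_energy(slope, speeds, train_mass):
        # Stand-in for an expensive numerical computation whose inputs repeat
        # across trains and scenarios in a simulation.
        return train_mass * (1 + slope) * sum(v * v for v in speeds)

    print(segment_energy(0.01, (10.0, 12.5, 15.0), 2.0e5))   # computed
    print(segment_energy(0.01, (10.0, 12.5, 15.0), 2.0e5))   # served from the cache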

    Incremento de prestaciones en el acceso en Grid de datos

    Proceedings of the Sixteenth Jornadas de Paralelismo, held 13-16 September 2005 in Granada. The Grid computing model has evolved in recent years to provide a high-performance computing environment over wide-area networks. However, one of its biggest problems lies in applications that make intensive and massive use of data. Replication has been used as a solution to the problems of these applications; however, classical replication suffers from certain limitations, such as adaptability and the high latency of this new environment. For this reason, a new data replication and organization algorithm is proposed that provides high-performance access in a Data Grid.